26 research outputs found

    CNN AND LSTM FOR THE CLASSIFICATION OF PARKINSON'S DISEASE BASED ON THE GTCC AND MFCC

    Get PDF
    Parkinson's disease is a recognizable clinical syndrome with a variety of causes and clinical presentations; it represents a rapidly growing neurodegenerative disorder. Since about 90 percent of Parkinson's disease sufferers have some form of early speech impairment, recent studies on the telediagnosis of Parkinson's disease have focused on recognizing voice impairments from vowel phonations or the subjects' discourse. In this paper, we present a new approach for Parkinson's disease detection from speech that is based on CNN and LSTM models and uses two categories of features, Mel Frequency Cepstral Coefficients (MFCC) and Gammatone Cepstral Coefficients (GTCC), extracted from noise-removed speech signals with comparative EMD-DWT and DWT-EMD analysis. The proposed model is divided into three stages. In the first step, noise is removed from the signals using the EMD-DWT and DWT-EMD methods. In the second step, the GTCC and MFCC are extracted from the enhanced audio signals. In the third step, classification is carried out by feeding these features into the LSTM and CNN models, which are designed to capture sequential information in the extracted features. The experiments are performed on the PC-GITA and Sakar datasets with 10-fold cross-validation. The highest classification accuracy on the Sakar dataset reached 100% for both EMD-DWT-GTCC-CNN and DWT-EMD-GTCC-CNN; on the PC-GITA dataset, the accuracy reached 100% for EMD-DWT-GTCC-CNN and 96.55% for DWT-EMD-GTCC-CNN. The results of this study indicate that GTCC features are more appropriate and accurate for the assessment of PD than MFCC.
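    As an illustrative aside, MFCC-style features of the kind fed to the CNN and LSTM above are built from framing, a mel filterbank, and a DCT. The following is a minimal NumPy sketch, not the authors' pipeline: the frame size, filterbank shape, and synthetic test tone are all illustrative assumptions, and the EMD-DWT denoising stage is omitted.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=20, n_ceps=13):
    # frame the signal with a Hann window
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frames.append(signal[start:start + n_fft] * np.hanning(n_fft))
    frames = np.array(frames)
    # power spectrum of each frame
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # triangular mel filterbank
    def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
    def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    logmel = np.log(spec @ fb.T + 1e-10)
    # DCT-II decorrelates the log-mel energies into cepstral coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return logmel @ dct.T

# one second of a 220 Hz tone as a stand-in for a vowel phonation
sig = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)
feats = mfcc(sig)
print(feats.shape)  # (61, 13): 61 frames, 13 coefficients each
```

    Each row is one frame's cepstral vector; stacking such rows over time yields the 2-D feature maps that convolutional and recurrent classifiers consume. GTCC follows the same scheme with a gammatone filterbank in place of the mel filterbank.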

    Traffic congestion prevention system

    Get PDF
    Transport is one of the key elements in the development of any country; it can be a powerful catalyst for economic growth. However, the infrastructure cannot accommodate the huge number of vehicles, which produces several problems, particularly in terms of road safety, loss of time, and pollution. One of the most significant problems is congestion, a major handicap for the road transport system. An alternative is to use new communication technologies to send traffic information, such as treacherous road conditions and accident sites, for a more efficient use of the existing infrastructure. In this paper, we present a CPS system that can help drivers have a better trip by finding the optimal route to reduce travel time and fuel consumption. This system is based on our recent work [1]. This new approach aims to avoid congestion and queues, ensuring a more efficient and optimal use of the existing road infrastructure. To that end, we concentrate on analyzing useful and reliable traffic information collected in real time. The system is simulated under several conditions, and experimental results show that our approach is very effective. In future work, we will try to improve the system by adding more complexity to it.
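    The routing step alluded to above, finding a route that minimizes travel time over a road network weighted with real-time traffic estimates, can be sketched with Dijkstra's algorithm. This is a generic illustration, not the system described in [1]; the toy network and edge weights are invented.

```python
import heapq

def shortest_route(graph, src, dst):
    # graph: node -> list of (neighbour, travel_time_minutes)
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # reconstruct the path by walking predecessors back from dst
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# toy road network: edge weights are current travel-time estimates
roads = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
path, minutes = shortest_route(roads, "A", "D")
print(path, minutes)  # ['A', 'C', 'B', 'D'] 8
```

    Updating the edge weights as congestion reports arrive and re-running the search is the basic mechanism by which such a system steers drivers around queues.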

    Features selection by genetic algorithm optimization with k-nearest neighbour and learning ensemble to predict Parkinson disease

    Get PDF
    Among the several approaches to detecting Parkinson's disease, one is based on the speech signal, since speech impairment is a symptom of this disease. In this paper, focusing on signal analysis, a dataset of voice recordings has been used. In these recordings, the patients were asked to utter the vowels “a”, “o”, and “u”. The discrete wavelet transform (DWT) was applied to the speech signal to obtain the variable resolution that could reveal the most important information about the patients. From the approximation a3 obtained by the Daubechies wavelet at scale 2, level 3, 21 features have been extracted: linear predictive coding (LPC) coefficients, energy, zero-crossing rate (ZCR), mel frequency cepstral coefficients (MFCC), and wavelet Shannon entropy. Then, for the classification, the K-nearest neighbour (KNN) algorithm has been used alongside ensemble learning. KNN is a type of instance-based learning that makes decisions based on approximated local functions. However, throughout the learning process, the choice of training features can have a significant impact on the overall process. Here stands out the role of the genetic algorithm (GA), used to select the training features that give the most accurate classification.
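    The GA-plus-KNN selection loop described above can be sketched as follows: individuals are binary masks over the features, fitness is KNN accuracy on the masked data, and selection, crossover, and mutation breed better masks. This is a minimal illustration on synthetic data, not the authors' implementation; the population size, mutation rate, leave-one-out evaluation, and toy dataset are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_accuracy(X, y, mask, k=3):
    # leave-one-out k-NN accuracy on the selected feature subset
    Xs = X[:, mask.astype(bool)]
    if Xs.shape[1] == 0:
        return 0.0
    d = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # a sample cannot vote for itself
    idx = np.argsort(d, axis=1)[:, :k]
    pred = np.array([np.bincount(v).argmax() for v in y[idx]])
    return float((pred == y).mean())

def ga_select(X, y, pop=20, gens=15, p_mut=0.1):
    n = X.shape[1]
    population = rng.integers(0, 2, size=(pop, n))  # binary feature masks
    for _ in range(gens):
        fit = np.array([knn_accuracy(X, y, m) for m in population])
        parents = population[np.argsort(fit)[::-1][: pop // 2]]  # elitism
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child[rng.random(n) < p_mut] ^= 1        # bit-flip mutation
            children.append(child)
        population = np.vstack([parents] + children)
    fit = np.array([knn_accuracy(X, y, m) for m in population])
    return population[fit.argmax()], fit.max()

# toy data: features 0 and 1 carry the class, the other six are noise
X = rng.normal(size=(60, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X[:, 2:] = rng.normal(size=(60, 6))
mask, acc = ga_select(X, y)
print(mask, acc)
```

    The returned mask marks which features survive; in the paper's setting those would be the subset of the 21 wavelet-domain features that maximizes classification accuracy.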

    Performance analysis of a novel OWDM-IDMA approach for wireless communication system

    Get PDF
    Efficiency and adaptivity play a major role in the design of fourth-generation wireless systems (4G). These systems should be efficient in terms of bandwidth and power allocation and should satisfy users' requirements for low power consumption, little interference with other systems, and high-rate transmission. Moreover, low-complexity transceivers are expected. This paper proposes a novel multiple access technique called OWDM-IDMA (Orthogonal Wavelet Division Multiplexing-Interleave Division Multiple Access), a combination of the OWDM (Orthogonal Wavelet Division Multiplexing) and IDMA (Interleave Division Multiple Access) schemes. The IDMA and OWDM principles are also outlined. The conventional OFDM-IDMA and the proposed OWDM-IDMA are compared in terms of Peak-to-Average Power Ratio (PAPR), and the performance of the presented technique is evaluated over Additive White Gaussian Noise (AWGN) multipath channels by estimating the BER (Bit Error Rate).
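    PAPR, the metric used above to compare the two schemes, is simply the ratio of peak to mean instantaneous power of a transmitted block. A small NumPy sketch, with an OFDM-style block built from random QPSK symbols as an illustrative stand-in (the subcarrier count and modulation are assumptions, not the paper's exact configuration):

```python
import numpy as np

rng = np.random.default_rng(1)

def papr_db(x):
    # peak-to-average power ratio of a complex baseband block, in dB
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# OFDM-style block: IFFT of N random QPSK symbols, scaled to unit average power
N = 64
qpsk = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
ofdm_block = np.fft.ifft(qpsk) * np.sqrt(N)
print(papr_db(ofdm_block))
```

    Multicarrier blocks add many independent symbols coherently, so occasional large peaks drive the PAPR well above 0 dB; a constant-envelope signal, by contrast, has a PAPR of exactly 0 dB. Lowering this ratio is what motivates replacing the Fourier basis with a wavelet one.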

    Convolutional Neural Network for Segmentation and Classification of Glaucoma

    Get PDF
    Glaucoma is an eye disease caused by elevated intraocular pressure that commonly leads to optic nerve damage. The optic nerve, which transmits visual signals from the eye to the brain, is essential for maintaining good and clear vision. Glaucoma is considered one of the leading causes of blindness. Accordingly, the earlier doctors can diagnose and detect the disease, the more feasible its treatment becomes. Aiming to facilitate this task, this study proposes a method for detecting the disease by analyzing images of the interior of the eye using a convolutional neural network. The method consists of segmentation based on a modified U-Net architecture and classification using DenseNet-201. The proposed model was evaluated on glaucoma images from the DRISHTI-GS and RIM-ONE datasets. These datasets served as valuable sources of diverse and representative glaucoma-related images, enabling a thorough evaluation of the model's performance. The results were highly promising: the segmentation accuracy reached 96.65%, while the classification accuracy reached 96.90%. This means the model excelled in accurately delineating and isolating the relevant regions of interest within the eye images, such as the optic disc and optic cup, which are crucial for diagnosing glaucoma.
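    Segmentation quality of the kind reported above is commonly scored by the overlap between predicted and ground-truth masks; the Dice coefficient is a standard choice. A minimal sketch on toy masks (the metric is standard, but the square-disc example and 2-pixel shift are illustrative, not taken from the paper):

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    # Dice similarity between two binary masks: 2|A∩B| / (|A|+|B|)
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2 * inter + eps) / (pred.sum() + truth.sum() + eps)

# toy optic-disc masks: ground truth vs. a prediction shifted 2 px down
truth = np.zeros((64, 64), dtype=int)
truth[20:40, 20:40] = 1          # 20x20 "disc"
pred = np.zeros_like(truth)
pred[22:42, 20:40] = 1
print(round(dice(pred, truth), 3))  # overlap 18x20 -> 2*360/(400+400) = 0.9
```

    Applied to the optic disc and optic cup masks produced by a U-Net, this score quantifies how well the delineation matches the expert annotation; the cup-to-disc ratio derived from those masks is the quantity clinicians use for glaucoma grading.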

    Blind seismic deconvolution with long wavelet impulse response

    No full text
    International audience

    Statistical Estimation of GNSS Pseudo-range Errors

    Get PDF
    Satellite technology has many advantages over the older instruments used previously and gives unparalleled precision compared to other systems. Nonetheless, it is subject to problems related to propagation through the atmosphere, obstacles in the receiving environment, the instability of the clocks used, and the receiver's electronic noise; the errors these phenomena cause may lead to inaccuracies of up to tens of meters. This paper describes methods for estimating pseudo-range errors based on different statistical filters. The Rao-Blackwellized filter has given interesting results compared to the extended Kalman filter, but the particle filter with a Kalman filter proposal performs much better than many other particle filtering algorithms.
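    The filtering idea underlying these estimators can be shown in its simplest form: a scalar Kalman filter tracking a slowly varying pseudo-range bias buried in measurement noise. This is a textbook sketch, not the Rao-Blackwellized or particle filters evaluated in the paper; the noise variances and the simulated constant bias are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def kalman_1d(z, q=1e-3, r=0.5):
    # scalar random-walk Kalman filter; state = pseudo-range bias (meters)
    # q: process noise variance, r: measurement noise variance
    x, p = 0.0, 1.0
    out = []
    for zi in z:
        p += q                      # predict: bias drifts as a random walk
        k = p / (p + r)             # Kalman gain
        x += k * (zi - x)           # update with the innovation
        p *= (1 - k)
        out.append(x)
    return np.array(out)

# simulate 200 noisy measurements of a 3 m pseudo-range bias
true_bias = 3.0
meas = true_bias + rng.normal(0, np.sqrt(0.5), 200)
est = kalman_1d(meas)
print(est[-1])  # close to 3.0
```

    The Rao-Blackwellized and particle filters in the paper extend this recursion to non-linear, non-Gaussian error models by sampling part of the state and solving the rest analytically with exactly this kind of Kalman update.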

    Identification et Déconvolution aveugle (Application aux signaux de sismique-réflexion sous-marine)

    No full text
    In seismic deconvolution, a blind approach must be considered when the reflectivity sequence (the response of the medium), the source wavelet signal, and the noise power level are unknown; blind deconvolution aims to determine the source wavelet and the reflectivity sequence. In this thesis, we mainly focused our work on blind deconvolution based on the maximum likelihood and maximum a posteriori criteria, applying stochastic algorithms of the SEM and SAEM type, and MCMC methods for the Bayesian approach. A well-known difficulty of these algorithms is their sensitivity to the initialization of the wavelet. A new method is proposed to solve this problem: it consists in detecting the local optima of the wavelet, then using the maximum kurtosis criterion to choose the wavelet's global optimum. Moreover, in experiments of practical interest where the wavelet impulse response is quite long, direct application of these algorithms gives a poor estimate of the wavelet, with high variance. To overcome this problem within the framework of classical blind deconvolution techniques, a two-step wavelet estimation is proposed.
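    The maximum kurtosis criterion mentioned above exploits the fact that a sparse, spiky reflectivity sequence is strongly non-Gaussian, so among candidate deconvolution outputs the one with the highest kurtosis is preferred. A minimal sketch of the statistic itself (the synthetic spiky sequence is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def excess_kurtosis(x):
    # sample excess kurtosis: ~0 for Gaussian data, large for spiky data
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2 - 3

# a sparse reflectivity-like sequence vs. plain Gaussian noise
spiky = np.zeros(1000)
spiky[rng.integers(0, 1000, 20)] = rng.normal(0, 1, 20)
gauss = rng.normal(0, 1, 1000)
print(excess_kurtosis(spiky) > excess_kurtosis(gauss))  # True
```

    Scoring each local optimum's deconvolved output this way and keeping the maximizer is the selection rule the thesis pairs with the SEM/SAEM and MCMC estimators.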

    Modeling of Video Sequences by Gaussian Mixture: Application in Motion Estimation by Block Matching Method

    No full text
    This article investigates a new method of motion estimation based on a block matching criterion, through the modeling of image blocks by a mixture of two or three Gaussian distributions. The mixture parameters (weights, mean vectors, and covariance matrices) are estimated by the Expectation-Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The similarity between a block in the current image and the most resembling one in a search window of the reference image is measured by minimizing the extended Mahalanobis distance between the clusters of the mixture. Experiments performed on sequences of real images have given good results, and the PSNR gain reached 3 dB.
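    The Mahalanobis distance at the heart of this matching criterion generalizes Euclidean distance by weighting each direction by the cluster's covariance. A minimal sketch of the plain point-to-cluster form (the diagonal-covariance example is illustrative; the paper's extended variant between two mixture models is not reproduced here):

```python
import numpy as np

def mahalanobis(x, mean, cov):
    # Mahalanobis distance of point x from a Gaussian cluster N(mean, cov)
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

mean = np.array([0.0, 0.0])
cov = np.array([[4.0, 0.0],    # variance 4 along the first axis
                [0.0, 1.0]])   # variance 1 along the second
# two points at the same Euclidean distance from the mean:
print(mahalanobis(np.array([2.0, 0.0]), mean, cov))  # 1.0 (high-variance axis)
print(mahalanobis(np.array([0.0, 2.0]), mean, cov))  # 2.0 (low-variance axis)
```

    Because deviations along high-variance directions are discounted, comparing blocks through their fitted Gaussian clusters is more tolerant of illumination and texture variation than a raw sum-of-squared-differences match.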